Standard gradient descent algorithms applied to sequences of tasks are known to produce catastrophic forgetting in deep neural networks: when trained on a new task in a sequence, the model updates its parameters on the current task and forgets past knowledge. This paper explores scenarios that scale the number of tasks in a finite environment, composed of long sequences of tasks with reoccurring data. We show that, in this setting, stochastic gradient descent can learn, make progress, and converge to solutions that, according to the existing literature, would require a continual learning algorithm. In other words, the model retains and accumulates knowledge without any dedicated memorization mechanism. We propose a new experimental framework, SCoLe (Scaling Continual Learning), to study the knowledge retention and accumulation of algorithms on potentially infinite sequences of tasks. To explore this setting, we run a large number of experiments on sequences of 1,000 tasks to better understand this new family of settings. We also propose a slight modification of vanilla stochastic gradient descent that facilitates continual learning in this setting. The SCoLe framework is a good simulation of practical training environments and enables the study of convergence behaviour over long sequences. Our experiments show that previous results obtained on short scenarios do not always extrapolate to longer ones.
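To make the protocol concrete, here is a minimal sketch of a SCoLe-style training loop, assuming synthetic tasks and a toy model of our own choosing; the task pool, architecture, and hyperparameters are illustrative stand-ins, not the paper's implementation:

```python
# A minimal sketch of the SCoLe protocol (assumed interface, not the authors'
# code): tasks reoccur in a long stream drawn from a finite pool, and plain
# SGD is used with no explicit continual-learning mechanism.
import random
import torch
import torch.nn as nn

torch.manual_seed(0)

# Finite pool of synthetic 2-class tasks; each task has its own fixed data.
task_pool = [(torch.randn(256, 20), torch.randint(0, 2, (256,))) for _ in range(10)]

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):                      # long sequence: tasks reoccur many times
    x, y = random.choice(task_pool)           # sample the next task from the pool
    idx = torch.randperm(len(x))[:32]         # mini-batch from the current task only
    opt.zero_grad()
    loss_fn(model(x[idx]), y[idx]).backward()
    opt.step()

# Retention is measured as accuracy over the whole pool, i.e. all tasks seen so far.
with torch.no_grad():
    accs = [(model(x).argmax(1) == y).float().mean().item() for x, y in task_pool]
print(f"mean accuracy across the task pool: {sum(accs) / len(accs):.3f}")
```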
The rapid development of large-scale pre-training has led to foundation models that can act as effective feature extractors on a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and CL in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, and the resulting latent space affect CL performance. To this end, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity, and learning depend on the input data characteristics and not necessarily on the CL algorithm. First, we show that in some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute cost. We then show how models pre-trained on broader data yield better performance for various replay buffer sizes, and explain this through the representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that differ from the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL, including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. The codebase is available at https://github.com/oleksost/latent_cl.
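As an illustration of the latent-space setting, the following is a minimal sketch of vanilla latent replay with a frozen pre-trained encoder; the encoder choice, buffer policy, and classifier head are hypothetical stand-ins rather than the released codebase:

```python
# A minimal sketch of latent replay: a frozen pre-trained encoder maps raw data
# into a latent space once, and a small head is trained continually on current
# latents mixed with a replay buffer of past latents.
import random
import torch
import torch.nn as nn
import torchvision.models as models

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()                # use penultimate features as latents
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)

head = nn.Linear(512, 100)                # lightweight classifier trained in latent space
opt = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
buffer = []                               # replay buffer of (latent, label) pairs

def train_task(loader, buffer_size=2000):
    for x, y in loader:
        with torch.no_grad():
            z = encoder(x)                # raw data is encoded once, cheaply reused
        if buffer:                        # mix current latents with replayed ones
            zb, yb = map(torch.stack, zip(*random.sample(buffer, min(32, len(buffer)))))
            z, y = torch.cat([z, zb]), torch.cat([y, yb])
        opt.zero_grad()
        loss_fn(head(z), y).backward()
        opt.step()
        for zi, yi in zip(z[:8], y[:8]):  # simplified storage policy: keep a few per batch
            if len(buffer) < buffer_size:
                buffer.append((zi.detach(), yi))
```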
This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, based on the idea that the dynamics of high-level concepts are often sparse. We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graphical model that relates them. We develop a rigorous identifiability theory, building on recent nonlinear independent component analysis (ICA) results, that formalizes this principle and shows how the latent variables can be recovered if one regularizes the latent mechanisms to be sparse and if certain graph connectivity criteria are satisfied by the data-generating process. As a special case of our framework, we show how one can leverage interventions with unknown targets on the latent factors to disentangle them, thereby drawing further connections between ICA and causality. We also propose a VAE-based method in which the latent mechanisms are learned and regularized via binary masks, and validate our theory by showing that it learns disentangled representations in simulations.
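A toy sketch of the masking idea follows, under the assumption of a deterministic relaxed mask rather than the paper's exact VAE parameterization; the architecture and penalty weight are illustrative:

```python
# Toy sketch of mechanism sparsity regularization: each latent dimension's
# transition mechanism reads the previous latents through a learnable
# (relaxed) binary mask, and the mask is regularized to be sparse.
import torch
import torch.nn as nn

class SparseMechanism(nn.Module):
    def __init__(self, latent_dim=8, hidden=32):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(latent_dim, latent_dim))
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(latent_dim)
        )

    def forward(self, z_prev):
        mask = torch.sigmoid(self.mask_logits)       # relaxed binary mask over the graph
        # mechanism i sees only the masked parents of latent i
        return torch.cat([net(z_prev * mask[i]) for i, net in enumerate(self.nets)], dim=-1)

    def sparsity_penalty(self):
        return torch.sigmoid(self.mask_logits).sum()  # penalize expected graph density

# training would add lam * mech.sparsity_penalty() to the ELBO-style objective
mech = SparseMechanism()
z_next_mean = mech(torch.randn(16, 8))
loss = ((z_next_mean - torch.randn(16, 8)) ** 2).mean() + 0.01 * mech.sparsity_penalty()
loss.backward()
```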
The integration of data and knowledge from several sources is known as data fusion. When data is available in a distributed fashion or when different sensors are used to infer a quantity of interest, data fusion becomes essential. In Bayesian settings, a priori information of the unknown quantities is available and, possibly, shared among the distributed estimators. When the local estimates are fused, such prior might be overused unless it is accounted for. This paper explores the effects of shared priors in Bayesian data fusion contexts, providing fusion rules and analysis to understand the performance of such fusion as a function of the number of collaborative agents and the uncertainty of the priors. Analytical results are corroborated through experiments in a variety of estimation and classification problems.
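As a concrete instance, the following sketch (our own Gaussian illustration, not necessarily the paper's exact fusion rule) shows how discounting the redundant copies of a shared prior recovers the centralized posterior in information form:

```python
# Gaussian example of why a shared prior must be discounted when fusing local
# posteriors. In information (inverse-covariance) form, with n agents sharing
# the prior (J0, h0):
#   J_fused = sum_i J_i - (n - 1) * J0,   h_fused = sum_i h_i - (n - 1) * h0,
# which recovers the centralized posterior instead of counting the prior n times.
import numpy as np

rng = np.random.default_rng(0)
x_true, sigma0, sigma = 2.0, 1.0, 0.5
n = 5                                              # number of collaborative agents
y = x_true + sigma * rng.standard_normal(n)        # one measurement per agent

J0, h0 = 1 / sigma0**2, 0.0                        # shared prior N(0, sigma0^2)
J_loc = J0 + 1 / sigma**2                          # each local posterior precision
h_loc = h0 + y / sigma**2                          # local information vectors

naive_mean = h_loc.sum() / (n * J_loc)             # prior counted n times: biased toward 0
J_fused = n * J_loc - (n - 1) * J0                 # discount the n-1 extra prior copies
fused_mean = (h_loc.sum() - (n - 1) * h0) / J_fused

central_mean = (h0 + y.sum() / sigma**2) / (J0 + n / sigma**2)
print(naive_mean, fused_mean, central_mean)        # fused == centralized; naive is not
```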
Methods of pattern recognition and machine learning are applied extensively in science, technology, and society. Hence, any advances in related theory may translate into large-scale impact. Here we explore how algorithmic information theory, especially algorithmic probability, may aid in a machine learning task. We study a multiclass supervised classification problem, namely learning the RNA molecule sequence-to-shape map, where the different possible shapes are taken to be the classes. The primary motivation for this work is a proof of concept example, where a concrete, well-motivated machine learning task can be aided by approximations to algorithmic probability. Our approach is based on directly estimating the class (i.e., shape) probabilities from shape complexities, and using the estimated probabilities as a prior in a Gaussian process learning problem. Naturally, with a large amount of training data, the prior has no significant influence on classification accuracy, but in the very small training data regime, we show that using the prior can substantially improve classification accuracy. To our knowledge, this work is one of the first to demonstrate how algorithmic probability can aid in a concrete, real-world, machine learning problem.
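To illustrate the core construction, here is a rough sketch that approximates shape complexity by compressed description length and converts it into a class prior via P(shape) ∝ 2^(−K(shape)); the example shapes and the compressor are hypothetical stand-ins for the paper's complexity estimates:

```python
# Complexity-derived class prior: algorithmic probability suggests
# P(shape) ~ 2^(-K(shape)), with K approximated here by compressed length in
# bits. The resulting prior could then bias a classifier (e.g., a Gaussian
# process) in the very small training data regime.
import zlib
import numpy as np

def complexity(shape: str) -> float:
    """Crude upper bound on Kolmogorov complexity via compressed length in bits."""
    return 8 * len(zlib.compress(shape.encode()))

shapes = ["((((....))))", "((..((...))..))", "............"]   # hypothetical RNA shapes
K = np.array([complexity(s) for s in shapes])
log_prior = -K * np.log(2)                                     # log of 2^(-K)
prior = np.exp(log_prior - log_prior.max())                    # stable exponentiation
prior /= prior.sum()                                           # normalized class prior
print(dict(zip(shapes, prior.round(3))))
```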
Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community. Recent research efforts have quickly led to the design of novel algorithms able to reduce the impact of the catastrophic forgetting phenomenon in deep neural networks. Due to this surge of interest in the field, many competitions have been held in recent years, as they are an excellent opportunity to stimulate research in promising directions. This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022. The focus of this competition is the complex continual object detection task, which is still underexplored in literature compared to classification tasks. The challenge is based on the challenge version of the novel EgoObjects dataset, a large-scale egocentric object dataset explicitly designed to benchmark continual learning algorithms for egocentric category-/instance-level object understanding, which covers more than 1k unique main objects and 250+ categories in around 100k video frames.
The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
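The winner-take-all component can be sketched as a simple top-k layer (a simplified stand-in for the sparse-recurrent DFC mechanism, without the feedback controller or the lateral recurrent connections):

```python
# Winner-take-all sparsity: only the top-k most active units in a layer keep
# their activation, so different tasks tend to engage non-overlapping subsets
# of units, which limits interference between tasks.
import torch
import torch.nn as nn

class WinnerTakeAll(nn.Module):
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        topk = h.topk(self.k, dim=-1)
        mask = torch.zeros_like(h).scatter_(-1, topk.indices, 1.0)
        return h * mask                  # all but the k winners are silenced

net = nn.Sequential(nn.Linear(784, 200), nn.ReLU(), WinnerTakeAll(k=20), nn.Linear(200, 10))
logits = net(torch.randn(32, 784))       # each input activates a sparse unit subset
```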
Interactivist models introduce a dynamic approach to language, communication, and cognition. In this work, we explore this fundamental theory in the context of dialogue modeling for spoken dialogue systems (SDS). To extend such a theoretical framework, we propose a set of design principles that adhere to central psycholinguistic and communication theories in order to achieve interactivism in SDS. From these theories, we identify the key ideas that ground the proposed design principles.
Deploying autonomous robots capable of exploring unknown environments has long been a topic of great relevance to the robotics community. In this work, we take a step in that direction by presenting an open-source active visual SLAM framework that exploits the structure of the underlying pose graph. Through careful estimation of a posteriori weighted pose graphs, D-optimal decision making is achieved online, with the objective of improving localization and mapping uncertainty as exploration occurs.
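The decision criterion can be illustrated with a generic D-optimality utility over predicted covariances (a simplified stand-in for the framework's code; the candidate goals and covariances are invented for the example):

```python
# D-optimality utility for active-SLAM decision making: candidate actions are
# scored by the geometric mean of the eigenvalues of the predicted pose-graph
# covariance, i.e. det(cov)^(1/n); lower means less expected uncertainty.
import numpy as np

def d_optimality(cov: np.ndarray) -> float:
    eigvals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    return float(np.exp(np.mean(np.log(eigvals))))   # numerically stable det^(1/n)

# choose the candidate whose predicted posterior covariance minimizes the criterion
candidates = {"goal_A": np.diag([0.04, 0.05, 0.02]), "goal_B": np.diag([0.01, 0.02, 0.03])}
best = min(candidates, key=lambda g: d_optimality(candidates[g]))
print(best)   # -> goal_B
```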
A well-known failure mode of neural networks is that they may produce confidently wrong predictions, especially on data that differs in some way from the training distribution. Such unsafe behaviour limits their applicability. To address this, we show that models providing accurate confidence estimates can be defined by adding constraints to their internal representations. Namely, we encode class labels as fixed, unique binary vectors, or class codes, and use them to enforce class-dependent activation patterns throughout the model. The resulting predictor, dubbed the Total Activation Classifier (TAC), is used as an add-on to a base classifier to indicate how reliable a prediction is. Given a data instance, TAC slices intermediate representations into disjoint sets and reduces each slice to a scalar, yielding an activation profile. During training, activation profiles are pushed towards the code assigned to the given training instance. At test time, one can predict the class whose code best matches the example's activation profile. Empirically, we observe that the resemblance between activation patterns and their corresponding codes yields an inexpensive, unsupervised approach for inducing discriminative confidence scores. Namely, we show that TAC is at least as good as state-of-the-art confidence scores extracted from existing models, while strictly improving the model's value in the rejection setting. TAC is also observed to work well across multiple types of architectures and data modalities.
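The slicing-and-matching mechanism can be sketched as follows; this is a schematic reading of the description above, and the slice reduction, code assignment, and tensor shapes are illustrative assumptions:

```python
# Schematic TAC sketch: an intermediate representation is sliced into disjoint
# chunks, each chunk is reduced to a scalar, and the resulting activation
# profile is matched against fixed binary class codes; the match quality
# doubles as a confidence score.
import torch

n_classes, code_len = 10, 16
torch.manual_seed(0)
codes = torch.randint(0, 2, (n_classes, code_len)).float()   # fixed binary class codes

def activation_profile(h: torch.Tensor) -> torch.Tensor:
    """Slice features into `code_len` disjoint chunks and reduce each to a scalar."""
    chunks = h.reshape(h.shape[0], code_len, -1)
    return chunks.mean(dim=-1)

h = torch.relu(torch.randn(4, 256))          # stand-in for an intermediate representation
profile = activation_profile(h)              # shape: (batch, code_len)
dist = torch.cdist(profile, codes)           # distance of each profile to every code
pred = dist.argmin(dim=1)                    # class whose code matches best
confidence = -dist.min(dim=1).values         # closer match -> higher confidence
```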